The returns are weakly stationary, with only a few instances of high correlation. The ACF and PACF suggest candidate orders p = 4 and q = 4, so an ARIMA model is appropriate.
Code
ggAcf(returns_ups^2, na.action = na.pass)
Code
ggPacf(returns_ups^2, na.action = na.pass)
The squared returns show significant correlations at lag orders p = 1 to 6 and q = 0 to 5. Given this observation, a GARCH model is more appropriate.
The returns are weakly stationary, with only a few instances of high correlation. The ACF and PACF suggest candidate orders p = 1 or 4 and q = 1, 4, or 5, so an ARIMA model is appropriate.
Code
ggAcf(returns_jbht^2, na.action = na.pass)
Code
ggPacf(returns_jbht^2, na.action = na.pass)
The squared returns show significant correlations at lag orders p = 1 to 6 and q = 0 to 5. Given this observation, a GARCH model is more appropriate.
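The visual evidence from the ACF/PACF of the squared returns can be backed by a formal test. A minimal sketch (assuming `returns_jbht` from the earlier chunks; a simulated series stands in if it is absent so the snippet runs standalone) applies a Ljung-Box test to the squared returns, where a small p-value signals ARCH effects:

```r
# Assumes returns_jbht from the chunks above; fall back to simulated
# noise so the snippet runs standalone.
if (!exists("returns_jbht")) returns_jbht <- rnorm(500, sd = 0.015)

# Ljung-Box test on the squared returns: a small p-value indicates
# autocorrelation in the squared series, i.e. ARCH effects, which
# motivates fitting a GARCH model.
Box.test(as.numeric(returns_jbht)^2, lag = 10, type = "Ljung-Box")
```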
p d q AIC BIC AICc
63 5 0 3 -20235.07 -20166.99 -20235
Code
temp[which.min(temp$BIC),]
p d q AIC BIC AICc
2 0 1 0 -20223.76 -20211.38 -20223.76
Code
temp[which.min(temp$AICc),]
p d q AIC BIC AICc
63 5 0 3 -20235.07 -20166.99 -20235
ARIMA(5,0,3) minimizes the AIC and AICc, while the more parsimonious ARIMA(0,1,0) minimizes the BIC.
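The grid-search winner can also be sanity-checked against `forecast::auto.arima`, which runs a stepwise AICc search over (p, d, q). A sketch, assuming `log.ups` from the chunks above (a simulated random walk stands in otherwise); the stepwise search may settle on a different order than the exhaustive grid:

```r
library(forecast)

# Assumes log.ups from above; fall back to a simulated random walk so
# the snippet runs standalone.
if (!exists("log.ups")) log.ups <- cumsum(rnorm(500, sd = 0.01))

# Stepwise AICc search over non-seasonal ARIMA orders.
auto.arima(log.ups, seasonal = FALSE)
```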
Code
# model diagnostics
sarima(log.ups, 0,1,0)
initial value -4.226790
iter 1 value -4.226790
final value -4.226790
converged
initial value -4.226790
iter 1 value -4.226790
final value -4.226790
converged
<><><><><><><><><><><><><><>
Coefficients:
Estimate SE t.value p.value
constant 4e-04 2e-04 1.5558 0.1198
sigma^2 estimated as 0.0002131361 on 3601 degrees of freedom
AIC = -5.614592 AICc = -5.614592 BIC = -5.611155
Code
sarima(log.ups, 5,0,3)
initial value -0.791335
...
final value -4.229951
converged
initial value -4.227259
...
final value -4.228043
converged
<><><><><><><><><><><><><><>
Coefficients:
Estimate SE t.value p.value
ar1 0.6566 0.0934 7.0334 0.0000
ar2 -0.0843 0.1258 -0.6704 0.5026
ar3 -0.3276 0.1247 -2.6274 0.0086
ar4 0.7026 0.1028 6.8364 0.0000
ar5 0.0519 0.0178 2.9196 0.0035
ma1 0.3287 0.0924 3.5571 0.0004
ma2 0.4277 0.0670 6.3827 0.0000
ma3 0.7537 0.0958 7.8708 0.0000
xmean 4.4820 0.4471 10.0255 0.0000
sigma^2 estimated as 0.0002121721 on 3594 degrees of freedom
AIC = -5.612658 AICc = -5.612644 BIC = -5.595479
According to the model diagnostics, ARIMA(0,1,0) is the better model: it attains a lower AIC and BIC with far fewer parameters.
Inspecting the Standardized Residuals plot of the model, substantial volatility clustering clearly remains, so further modeling is required to address it.
p d q AIC BIC AICc
29 3 0 2 -19376.74 -19327.23 -19376.7
Code
temp[which.min(temp$BIC),]
p d q AIC BIC AICc
2 0 1 0 -19352.54 -19340.16 -19352.54
Code
temp[which.min(temp$AICc),]
p d q AIC BIC AICc
29 3 0 2 -19376.74 -19327.23 -19376.7
ARIMA(3,0,2) minimizes the AIC and AICc, while ARIMA(0,1,0) minimizes the BIC.
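As with UPS, the grid-search winner can be cross-checked with `forecast::auto.arima`. A sketch assuming `log.jbht` from above (a simulated random walk stands in otherwise); the stepwise search may settle on a different order:

```r
library(forecast)

# Assumes log.jbht from above; fall back to a simulated random walk so
# the snippet runs standalone.
if (!exists("log.jbht")) log.jbht <- cumsum(rnorm(500, sd = 0.01))

# Stepwise AICc search over non-seasonal ARIMA orders.
auto.arima(log.jbht, seasonal = FALSE)
```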
Code
# model diagnostics
sarima(log.jbht, 0,1,0)
initial value -4.105854
iter 1 value -4.105854
final value -4.105854
converged
initial value -4.105854
iter 1 value -4.105854
final value -4.105854
converged
<><><><><><><><><><><><><><>
Coefficients:
Estimate SE t.value p.value
constant 5e-04 3e-04 1.7461 0.0809
sigma^2 estimated as 0.0002714567 on 3601 degrees of freedom
AIC = -5.372721 AICc = -5.37272 BIC = -5.369284
Code
sarima(log.jbht, 3,0,2)
initial value -0.617691
...
final value -4.107510
converged
initial value -4.106135
...
final value -4.107597
converged
<><><><><><><><><><><><><><>
Coefficients:
Estimate SE t.value p.value
ar1 -0.6404 0.0703 -9.1152 0e+00
ar2 0.8330 0.0343 24.2844 0e+00
ar3 0.8071 0.0775 10.4137 0e+00
ma1 1.6085 0.0759 21.1901 0e+00
ma2 0.7635 0.0835 9.1443 0e+00
xmean 4.4604 1.2830 3.4766 5e-04
sigma^2 estimated as 0.0002698608 on 3597 degrees of freedom
AIC = -5.373432 AICc = -5.373425 BIC = -5.361407
According to the model diagnostics, ARIMA(3,0,2) is the better model: it attains a lower AIC (the BIC, however, slightly favors ARIMA(0,1,0)) and all of its coefficients are significant.
Inspecting the Standardized Residuals plot of the model, substantial volatility clustering clearly remains, so further modeling is required to address it.
Series: log.ups
ARIMA(0,1,0)
sigma^2 = 0.0002133: log likelihood = 10112.66
AIC=-20223.33 AICc=-20223.33 BIC=-20217.14
Training set error measures:
ME RMSE MAE MPE MAPE
Training set 0.0003802395 0.01460221 0.009702943 0.008376692 0.2163833
MASE ACF1
Training set 0.9998262 -0.01543624
Code
res.ups <- fit.ups$res
ggtsdisplay(res.ups^2)
The squared residuals show significant correlations at lag orders p = 1 to 6 and q = 0 to 6. Given this observation, a GARCH model is more appropriate.
Code
model <- list()
## set counter
cc <- 1
for (p in 1:6) {
  for (q in 1:6) {
    model[[cc]] <- garch(res.ups, order = c(q, p), trace = F)
    cc <- cc + 1
  }
}
## get AIC values for model evaluation
GARCH_AIC <- sapply(model, AIC)
## model with lowest AIC is the best
which(GARCH_AIC == min(GARCH_AIC))
Series: log.jbht
ARIMA(3,0,2) with non-zero mean
Coefficients:
ar1 ar2 ar3 ma1 ma2 mean
-0.6404 0.8330 0.8071 1.6085 0.7635 4.4604
s.e. 0.0703 0.0343 0.0775 0.0759 0.0835 1.2830
sigma^2 = 0.0002703: log likelihood = 9687.24
AIC=-19360.47 AICc=-19360.44 BIC=-19317.15
Training set error measures:
ME RMSE MAE MPE MAPE MASE
Training set 0.0004864017 0.01642744 0.01182566 0.01080249 0.2688679 0.9994574
ACF1
Training set -0.0006273293
Code
res.jbht <- fit.jbht$res
ggtsdisplay(res.jbht^2)
The squared residuals show significant correlations at lag orders p = 1 to 6 and q = 0 to 6. Given this observation, a GARCH model is more appropriate.
Code
model <- list()
## set counter
cc <- 1
for (p in 1:6) {
  for (q in 1:6) {
    model[[cc]] <- garch(res.jbht, order = c(q, p), trace = F)
    cc <- cc + 1
  }
}
## get AIC values for model evaluation
GARCH_AIC <- sapply(model, AIC)
## model with lowest AIC is the best
which(GARCH_AIC == min(GARCH_AIC))
Series: log.ups
ARIMA(0,1,0)
sigma^2 = 0.0002133: log likelihood = 10112.66
AIC=-20223.33 AICc=-20223.33 BIC=-20217.14
Training set error measures:
ME RMSE MAE MPE MAPE
Training set 0.0003802395 0.01460221 0.009702943 0.008376692 0.2163833
MASE ACF1
Training set 0.9998262 -0.01543624
Code
summary(garchFit(~garch(1,4), res.ups,trace = F))
Title:
GARCH Modelling
Call:
garchFit(formula = ~garch(1, 4), data = res.ups, trace = F)
Mean and Variance Equation:
data ~ garch(1, 4)
<environment: 0x14d8f3650>
[data = res.ups]
Conditional Distribution:
norm
Coefficient(s):
mu omega alpha1 beta1 beta2 beta3
3.3642e-04 1.9620e-06 6.0924e-02 1.5463e-01 1.0000e-08 2.3361e-01
beta4
5.4340e-01
Std. Errors:
based on Hessian
Error Analysis:
Estimate Std. Error t value Pr(>|t|)
mu 3.364e-04 2.045e-04 1.645 0.099981 .
omega 1.962e-06 5.720e-07 3.430 0.000604 ***
alpha1 6.092e-02 9.206e-03 6.618 3.65e-11 ***
beta1 1.546e-01 1.178e-01 1.313 0.189252
beta2 1.000e-08 1.021e-01 0.000 1.000000
beta3 2.336e-01 1.594e-01 1.466 0.142684
beta4 5.434e-01 1.104e-01 4.921 8.62e-07 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Log Likelihood:
10413.6 normalized: 2.890258
Description:
Sat Apr 27 18:32:04 2024 by user:
Standardised Residuals Tests:
Statistic p-Value
Jarque-Bera Test R Chi^2 1.742714e+04 0.0000000
Shapiro-Wilk Test R W 9.084315e-01 0.0000000
Ljung-Box Test R Q(10) 1.277628e+01 0.2364505
Ljung-Box Test R Q(15) 2.192937e+01 0.1096694
Ljung-Box Test R Q(20) 2.554784e+01 0.1812697
Ljung-Box Test R^2 Q(10) 5.209197e+00 0.8767724
Ljung-Box Test R^2 Q(15) 6.245968e+00 0.9753132
Ljung-Box Test R^2 Q(20) 7.690850e+00 0.9937235
LM Arch Test R TR^2 5.539211e+00 0.9375089
Information Criterion Statistics:
AIC BIC SIC HQIC
-5.776631 -5.764606 -5.776639 -5.772346
Code
checkresiduals(garch(res.ups, order = c(4,1), trace = F))
Ljung-Box test
data: Residuals
Q* = 7.0853, df = 10, p-value = 0.7174
Model df: 0. Total lags used: 10
The model's residual plots generally look satisfactory, with only a few notable lags in the ACF plot. The AIC values are relatively low, indicating a good fit. However, not all GARCH coefficients are statistically significant (mu and beta1 through beta3 in particular), so some structure may not be fully captured by the model. The Ljung-Box p-values are all above 0.05, indicating no significant autocorrelation remains in the residuals.
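The choice of GARCH(1,4) over neighboring candidates can be made explicit by tabulating the fGarch information criteria side by side. A sketch assuming `res.ups` (the ARIMA residuals) from above; the orders tried here are illustrative, and simulated noise stands in if `res.ups` is absent:

```r
library(fGarch)

# Assumes res.ups (ARIMA residuals) from above; fall back to simulated
# noise so the snippet runs standalone.
if (!exists("res.ups")) res.ups <- rnorm(1000, sd = 0.015)

# Fit a few candidate GARCH(p, q) orders and collect their AICs from
# the fGARCH object's @fit$ics slot.
orders <- list(c(1, 0), c(1, 1), c(1, 4))
aics <- sapply(orders, function(o) {
  f <- garchFit(as.formula(sprintf("~ garch(%d, %d)", o[1], o[2])),
                data = res.ups, trace = FALSE)
  f@fit$ics["AIC"]
})
names(aics) <- sapply(orders, paste, collapse = ",")
aics  # the smallest AIC marks the preferred specification
```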
Best model: ARIMA(3,0,2)+GARCH(1,1)
Code
# returns_jbht
summary(fit.jbht)
Series: log.jbht
ARIMA(3,0,2) with non-zero mean
Coefficients:
ar1 ar2 ar3 ma1 ma2 mean
-0.6404 0.8330 0.8071 1.6085 0.7635 4.4604
s.e. 0.0703 0.0343 0.0775 0.0759 0.0835 1.2830
sigma^2 = 0.0002703: log likelihood = 9687.24
AIC=-19360.47 AICc=-19360.44 BIC=-19317.15
Training set error measures:
ME RMSE MAE MPE MAPE MASE
Training set 0.0004864017 0.01642744 0.01182566 0.01080249 0.2688679 0.9994574
ACF1
Training set -0.0006273293
Title:
GARCH Modelling
Call:
garchFit(formula = ~garch(1, 1), data = res.jbht, trace = F)
Mean and Variance Equation:
data ~ garch(1, 1)
<environment: 0x14d42f0e8>
[data = res.jbht]
Conditional Distribution:
norm
Coefficient(s):
mu omega alpha1 beta1
5.0221e-04 5.4495e-06 4.3369e-02 9.3621e-01
Std. Errors:
based on Hessian
Error Analysis:
Estimate Std. Error t value Pr(>|t|)
mu 5.022e-04 2.475e-04 2.029 0.0424 *
omega 5.450e-06 1.384e-06 3.938 8.22e-05 ***
alpha1 4.337e-02 6.441e-03 6.733 1.66e-11 ***
beta1 9.362e-01 1.014e-02 92.289 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Log Likelihood:
9882.896 normalized: 2.742963
Description:
Sat Apr 27 18:32:05 2024 by user:
Standardised Residuals Tests:
Statistic p-Value
Jarque-Bera Test R Chi^2 1773.2943427 0.0000000
Shapiro-Wilk Test R W 0.9706826 0.0000000
Ljung-Box Test R Q(10) 6.8524567 0.7392967
Ljung-Box Test R Q(15) 10.4572844 0.7900329
Ljung-Box Test R Q(20) 11.9815491 0.9167096
Ljung-Box Test R^2 Q(10) 4.7732784 0.9057995
Ljung-Box Test R^2 Q(15) 10.5959357 0.7806767
Ljung-Box Test R^2 Q(20) 16.3539424 0.6944377
LM Arch Test R TR^2 10.4428505 0.5771698
Information Criterion Statistics:
AIC BIC SIC HQIC
-5.483706 -5.476834 -5.483708 -5.481257
Code
checkresiduals(garch(res.jbht, order = c(1,1), trace = F))
Ljung-Box test
data: Residuals
Q* = 6.7644, df = 10, p-value = 0.7475
Model df: 0. Total lags used: 10
The model's residual plots generally look satisfactory, with only a few notable lags in the ACF plot. The AIC values are relatively low, indicating a good fit, and here all of the ARIMA and GARCH coefficients are statistically significant. The Ljung-Box p-values are all above 0.05, indicating no significant autocorrelation remains in the residuals.
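Once the final fit is chosen, the estimated conditional volatility can be extracted from the fGarch object and inspected directly. A sketch assuming `res.jbht` from above (simulated noise stands in otherwise):

```r
library(fGarch)

# Assumes res.jbht (ARIMA residuals) from above; fall back to simulated
# noise so the snippet runs standalone.
if (!exists("res.jbht")) res.jbht <- rnorm(1000, sd = 0.016)

# Refit the chosen GARCH(1,1) and pull out the conditional standard
# deviation series sigma_t.
fit.g <- garchFit(~ garch(1, 1), data = res.jbht, trace = FALSE)
sigma.t <- volatility(fit.g, type = "sigma")

# Spikes in sigma_t should line up with the volatility clusters seen
# in the returns plots.
plot(sigma.t, type = "l", ylab = "conditional volatility")
```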
Model Equations

UPS Stock Price: ARIMA(0,1,0) + GARCH(1,4)

$$\left(1-B\right) Y_t=\epsilon_t$$

$$\sigma_t^2=\omega+\alpha_1 \varepsilon_{t-1}^2+\beta_1 \sigma_{t-1}^2+\beta_2 \sigma_{t-2}^2+\beta_3 \sigma_{t-3}^2+\beta_4 \sigma_{t-4}^2$$

JBHT Stock Price: ARIMA(3,0,2) + GARCH(1,1)

$$\left(1-\phi_1 B-\phi_2 B^2-\phi_3 B^3\right) Y_t=\left(1+\theta_1 B+\theta_2 B^2\right) \epsilon_t$$

$$\sigma_t^2=\omega+\alpha_1 \varepsilon_{t-1}^2+\beta_1 \sigma_{t-1}^2$$